Concentration inequalities for functions of independent variables
Author
Abstract
Following the entropy method, this paper presents general concentration inequalities which can be applied to combinatorial optimization and empirical processes. The inequalities give improved concentration results for optimal travelling salesman tours, Steiner trees and the eigenvalues of random symmetric matrices.

1 Introduction

Since its appearance in 1995, Talagrand's convex distance inequality [17] has been very successful as a tool to prove concentration results for functions of independent variables in cases which were previously inaccessible, or could be handled only with great difficulty. The now classical applications (see McDiarmid [12] and Steele [15]) include concentration inequalities for configuration functions, such as the length of the longest increasing subsequence in a sample, or for geometrical constructions, such as the length of an optimal travelling salesman tour or an optimal Steiner tree. Another recently emerged technique for proving concentration results is the entropy method. Originating in the work of Leonard Gross on logarithmic Sobolev inequalities for Gaussian measures [6], the method has been developed and refined by Ledoux, Bobkov, Massart, Boucheron, Lugosi, Rio, Bousquet and others (see [8], [10], [11], [2], [3], etc.) to become an important tool in the study of empirical processes and learning theory. In Boucheron et al. [2] a general theorem on configuration functions is presented, which improves on the results obtained from the convex distance inequality. In Boucheron et al. [3] more results of this type are given, and a weak version of the convex distance inequality itself is derived. Technically, the core of the entropy method is a tensorisation inequality bounding the entropy of an n-variable function Z in terms of a sum of entropies of Z taken with respect to each variable separately.
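To fix ideas, the tensorisation inequality at the core of the method is the standard subadditivity of entropy, sketched here in one common form rather than quoted from the paper. For independent variables X_1, ..., X_n, Z = f(X_1, ..., X_n) and \mathrm{Ent}(Y) = \mathbb{E}[Y \log Y] - \mathbb{E}[Y] \log \mathbb{E}[Y], it reads

\mathrm{Ent}\bigl(e^{\lambda Z}\bigr) \;\le\; \mathbb{E}\Bigl[\sum_{k=1}^{n} \mathrm{Ent}_k\bigl(e^{\lambda Z}\bigr)\Bigr] \qquad \text{for all } \lambda \in \mathbb{R},

where \mathrm{Ent}_k denotes entropy taken with respect to X_k alone, all other variables held fixed. Combining a bound on each \mathrm{Ent}_k term with the Herbst argument applied to \mathbb{E}[e^{\lambda Z}] then produces exponential tail bounds.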
Similar resources
On the generalization of Trapezoid Inequality for functions of two variables with bounded variation and applications
In this paper, a generalization of the trapezoid inequality for functions of two independent variables with bounded variation is given, together with some applications.
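For orientation, the one-variable inequality being generalized can be stated as follows (a standard sharp form from the literature, not quoted from this paper): if f is of bounded variation on [a, b] with total variation \bigvee_a^b(f), then

\left| \int_a^b f(t)\,dt \;-\; (b - a)\,\frac{f(a) + f(b)}{2} \right| \;\le\; \frac{b - a}{2} \bigvee_a^b(f),

and the constant 1/2 is known to be best possible.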
Moment inequalities for functions of independent random variables
A general method for obtaining moment inequalities for functions of independent random variables is presented. It is a generalization of the entropy method which has been used to derive concentration inequalities for such functions [7], and is based on a generalized tensorization inequality due to Latała and Oleszkiewicz [25]. The new inequalities prove to be a versatile tool in a wide range o...
Moment Inequalities for Functions of Independent Random Variables by Stéphane Boucheron, Olivier Bousquet,
A general method for obtaining moment inequalities for functions of independent random variables is presented. It is a generalization of the entropy method which has been used to derive concentration inequalities for such functions [Boucheron, Lugosi and Massart Ann. Probab. 31 (2003) 1583–1614], and is based on a generalized tensorization inequality due to Latała and Oleszkiewicz [Lecture Note...
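A sketch of this generalized tensorisation, in its standard φ-entropy form (the precise class of admissible φ is the one characterized by Latała and Oleszkiewicz): for a nonnegative Z = f(X_1, ..., X_n) with independent X_k and suitable convex φ, for instance φ(x) = x^α with 1 ≤ α ≤ 2, the φ-entropy H_φ(Z) = E[φ(Z)] − φ(E[Z]) satisfies

H_\varphi(Z) \;\le\; \mathbb{E}\Bigl[\sum_{k=1}^{n} H_\varphi^{(k)}(Z)\Bigr],

where H_\varphi^{(k)} is the φ-entropy taken with respect to X_k alone. Taking φ(x) = x log x recovers the tensorisation behind exponential bounds, φ(x) = x^2 gives the Efron-Stein inequality, and the intermediate powers are what drive the moment inequalities.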
A note on concentration of submodular functions
We survey a few concentration inequalities for submodular and fractionally subadditive functions of independent random variables, implied by the entropy method for self-bounding functions. The power of these concentration bounds is that they are dimension-free, in particular implying standard deviation O(√E[f]) rather than the O(√n) which can be obtained for any 1-Lipschitz function of n variables...
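For context, a standard definition (paraphrased, not quoted from this note): a nonnegative function f of n variables is self-bounding if there exist functions f_k of n − 1 variables such that, writing x^{(k)} for x with its k-th coordinate dropped,

0 \le f(x) - f_k\bigl(x^{(k)}\bigr) \le 1 \qquad \text{and} \qquad \sum_{k=1}^{n} \Bigl( f(x) - f_k\bigl(x^{(k)}\bigr) \Bigr) \le f(x).

For such f, the entropy method gives \mathrm{Var}(Z) \le \mathbb{E}[Z] for Z = f(X_1, ..., X_n), along with tail bounds such as

\Pr\bigl(Z \ge \mathbb{E}[Z] + t\bigr) \;\le\; \exp\Bigl( -\frac{t^2}{2\,\mathbb{E}[Z] + 2t/3} \Bigr),

which is the source of the dimension-free O(√E[f]) standard deviation mentioned above.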
SOME PROBABILISTIC INEQUALITIES FOR FUZZY RANDOM VARIABLES
In this paper, the concepts of positive dependence and linearly positive quadrant dependence are introduced for fuzzy random variables. Also, an inequality is obtained for partial sums of linearly positive quadrant dependent fuzzy random variables. Moreover, a weak law of large numbers is established for linearly positive quadrant dependent fuzzy random variables. We extend some well-known inequalities...
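For reference, the classical real-valued versions of these dependence notions, which the paper extends to fuzzy random variables, are as follows. X and Y are positively quadrant dependent (PQD) if

\Pr(X > x,\; Y > y) \;\ge\; \Pr(X > x)\,\Pr(Y > y) \qquad \text{for all } x, y \in \mathbb{R},

and a sequence X_1, X_2, ... is linearly positive quadrant dependent (LPQD) if for all disjoint finite index sets A, B and all positive constants r_i, the sums \sum_{i \in A} r_i X_i and \sum_{j \in B} r_j X_j are PQD.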
Some Probability Inequalities for Quadratic Forms of Negatively Dependent Subgaussian Random Variables
In this paper, we obtain upper exponential bounds for the tail probabilities of quadratic forms of negatively dependent subgaussian random variables. In particular, the law of the iterated logarithm for quadratic forms of independent subgaussian random variables is generalized to the case of negatively dependent subgaussian random variables.
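For reference, one standard convention (the paper's exact normalization may differ): a real random variable X is subgaussian with parameter σ > 0 if

\mathbb{E}\bigl[e^{tX}\bigr] \;\le\; \exp\Bigl(\frac{\sigma^2 t^2}{2}\Bigr) \qquad \text{for all } t \in \mathbb{R},

which implies \Pr(|X| \ge u) \le 2 \exp\bigl(-u^2 / (2\sigma^2)\bigr). For independent subgaussian variables, exponential tail bounds for quadratic forms \sum_{i,j} a_{ij} X_i X_j are the subject of the Hanson-Wright inequality; the paper above concerns the negatively dependent analogue.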
Journal: Random Struct. Algorithms
Volume: 29
Issue: -
Pages: -
Published: 2006